Context for chatbot was compressed #688
Merged
Conversation
Integrates Svelte into the Astro project by adding @astrojs/svelte and related dependencies, updating configuration files, and introducing a mock Chat.svelte component. Adds a new interactive-help.astro page that renders the chat UI, preparing for future WebLLM integration.
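A minimal sketch of what registering the Svelte integration in the Astro config might look like (the file contents below are an assumption, not the exact config from this PR):

```ts
// astro.config.mjs (sketch) -- assumed shape after adding @astrojs/svelte
import { defineConfig } from "astro/config";
import svelte from "@astrojs/svelte";

export default defineConfig({
  // Registers the Svelte renderer so components like Chat.svelte
  // can be mounted from .astro pages such as interactive-help.astro.
  integrations: [svelte()],
});
```

The page would presumably mount the component with a client directive (e.g. `<Chat client:load />`) so it hydrates in the browser.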
Added prettier-plugin-svelte to devDependencies for improved Svelte formatting. Updated codebase to use consistent double quotes and improved formatting in Astro and Svelte config files, as well as minor whitespace and formatting adjustments in markdown and Astro files for consistency.
Refactored Chat.svelte to use English for UI text and comments, simplified message creation with a helper function, and improved code readability. Updated placeholder and button text, and streamlined the mock response logic. Minor formatting adjustments were made in index.astro for consistency.
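The helper function itself is not shown in this conversation; a plausible shape, with hypothetical names, is:

```ts
// Hypothetical message helper for Chat.svelte's <script lang="ts"> block -- names are assumed
type Role = "system" | "user" | "assistant";

interface ChatMessage {
  role: Role;
  content: string;
}

// Centralizes message construction so the user, assistant, and mock-response
// paths all build messages the same way.
function createMessage(role: Role, content: string): ChatMessage {
  return { role, content };
}

// Example: seed the conversation and append a user question.
let messages: ChatMessage[] = [createMessage("assistant", "Hi! How can I help?")];
messages = [...messages, createMessage("user", "How do I install the plugin?")];
```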
Added @mlc-ai/web-llm as a dependency and introduced llm.ts to manage LLM state and initialization in Svelte. Refactored Chat.svelte to use WebLLM for assistant responses, including streaming support, user consent for model download, and FAQ knowledge base preloading as a system prompt. Improved UI to handle LLM loading states, errors, and user interaction for model initialization.
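A rough sketch of how llm.ts could wrap the WebLLM engine and expose loading state to Svelte (store and function names are assumptions; the CreateMLCEngine call and its initProgressCallback option are part of @mlc-ai/web-llm's documented API):

```ts
// llm.ts (sketch) -- assumed module shape, not the exact code from this PR
import { CreateMLCEngine, type MLCEngine } from "@mlc-ai/web-llm";
import { writable } from "svelte/store";

export const loading = writable(false);
export const progressText = writable("");

let engine: MLCEngine | null = null;

// Called only after the user consents to the model download; caches the engine
// so repeated questions reuse the already-initialized instance.
export async function initEngine(modelId: string): Promise<MLCEngine> {
  if (engine) return engine;
  loading.set(true);
  try {
    engine = await CreateMLCEngine(modelId, {
      initProgressCallback: (report) => progressText.set(report.text),
    });
    return engine;
  } finally {
    loading.set(false);
  }
}
```

The FAQ knowledge base would presumably be prepended as a system message before the user's first question.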
Rewrote comments and UI text in Chat.svelte from Japanese to English, removed the FAQ preloading logic, and improved the system prompt for clearer assistant behavior. Updated interactive-help.astro to add a 'Back to the top page' link and use environment-based URL joining. These changes enhance clarity, maintainability, and user experience for English-speaking users.
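The environment-based URL joining presumably relies on Astro's import.meta.env.BASE_URL; a minimal sketch (the helper name is hypothetical):

```ts
// Sketch of base-aware links in interactive-help.astro's frontmatter (assumed approach)
const base = import.meta.env.BASE_URL; // "/" in dev, e.g. "/some-base/" when deployed under a subpath
const joinUrl = (path: string) => `${base.replace(/\/$/, "")}/${path.replace(/^\//, "")}`;
// Used as: <a href={joinUrl("")}>Back to the top page</a>
```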
Introduces a prominent 'Ask Anything (Experimental)' button on the homepage linking to the interactive help chat. Updates the initial assistant greeting in Chat.svelte for a more welcoming tone. Adds experimental, privacy, and accuracy disclaimers to the interactive help page for user awareness.
The FAQ.md was rewritten to focus on concise, LLM-friendly installation Q&A for all supported platforms, replacing the previous 50-question format. The Ubuntu, Flatpak, and Windows install pages were updated to use the <Code> component for command snippets and now provide clearer, step-by-step instructions. The Ubuntu page now dynamically fetches the latest .deb asset and offers both GUI and terminal install options. The macOS page was mistakenly updated to reference the Ubuntu .deb asset instead of the correct macOS package, which should be reviewed.
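A sketch of how the Ubuntu page could fetch the latest .deb asset at build time (the repository slug below is an assumption, not taken from this PR):

```ts
// Build-time fetch in the Ubuntu install page's frontmatter (sketch; repo slug assumed)
interface ReleaseAsset {
  name: string;
  browser_download_url: string;
}

const res = await fetch(
  "https://api.github.com/repos/royshil/obs-backgroundremoval/releases/latest",
);
const release: { tag_name: string; assets: ReleaseAsset[] } = await res.json();

// Pick the first asset that looks like a Debian/Ubuntu package.
const debAsset = release.assets.find((a) => a.name.endsWith(".deb"));
const debUrl = debAsset?.browser_download_url ?? "#";
```

The macOS page would need the equivalent filter for its own package asset rather than the .deb, per the note above.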
Improves code style and formatting across Svelte and Astro components, enhances the Chat UI for better streaming and error handling, and updates the default LLM model to Llama-3.2-1B-Instruct-q4f16_1-MLC. Also adds Svelte support to Prettier config and applies consistent code formatting to installation instructions.
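For context, a sketch of a streaming loop with error handling around web-llm's OpenAI-compatible chat API (the function name is hypothetical; only the model id comes from this PR):

```ts
// Streaming an assistant reply with error handling (sketch; not the exact Chat.svelte code)
import { CreateMLCEngine, type ChatCompletionMessageParam } from "@mlc-ai/web-llm";

const MODEL_ID = "Llama-3.2-1B-Instruct-q4f16_1-MLC";

export async function askAssistant(
  history: ChatCompletionMessageParam[],
  onToken: (partialReply: string) => void,
): Promise<string> {
  try {
    // In practice the engine would be created once and reused (see the llm.ts sketch above).
    const engine = await CreateMLCEngine(MODEL_ID);
    const chunks = await engine.chat.completions.create({ messages: history, stream: true });

    let reply = "";
    for await (const chunk of chunks) {
      reply += chunk.choices[0]?.delta?.content ?? "";
      onToken(reply); // update the rendered assistant bubble as tokens arrive
    }
    return reply;
  } catch (err) {
    // Surface a readable message instead of leaving the UI stuck in a loading state.
    return `Sorry, the model failed to respond: ${err instanceof Error ? err.message : String(err)}`;
  }
}
```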
Collaborator (Author)
@royshil ping
royshil (Owner) approved these changes on Nov 10, 2025 and left a comment:
Very cool
sobalap pushed a commit to sobalap/obs-backgroundremoval that referenced this pull request on Jan 7, 2026:
* Add Svelte support and interactive chat mock UI
* Add prettier-plugin-svelte and update code formatting
* Refactor Chat UI to use English and simplify logic
* Integrate WebLLM with Svelte chat UI and preload FAQ
* Refactor Chat UI and add English localization
* Add interactive help entry point and disclaimers
* Revise FAQ and improve OS install instructions
* Refactor Chat UI and update LLM model version
* Update llm.ts
* Update FAQ.md
* Update FAQ.md
* Update FAQ.md
* Update Chat.svelte
I have compressed the chatbot's context and chosen the smaller model for faster responses.